interpolation approach based on all the regressed means.

Establishing a regression function is a process of minimising the total regression error. For any set of collected data points, there will be a unique set of regressed means. The regressed means correspond to the estimated regression model parameters, i.e., $\alpha$ and $\beta$. A linear regression function depicted as a straight line is thus parameterised by these two parameters. This method is called the least squared error method (LSE), developed in the early 19th century [Stigler, 1986].

The basic assumption of LSE is that the responses (the $y$ values) or the regression errors of an OLR model follow a Gaussian distribution. If $y \sim \mathcal{G}(\mu, \sigma)$, then $\varepsilon = (\hat{y} - y) \sim \mathcal{G}(\mu, \sigma)$ as well. Given the mean and the standard deviation of a Gaussian distribution for a data set with $N$ data points, a likelihood function is defined as

$$\mathcal{L} = \prod_{n=1}^{N} p(\varepsilon_n) = \left(\frac{1}{\sigma\sqrt{2\pi}}\right)^{N} \prod_{n=1}^{N} \exp\left(-\frac{(\hat{y}_n - y_n - \mu)^2}{2\sigma^2}\right) \qquad (4.8)$$
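As an illustration of Eq. (4.8), the following sketch evaluates the likelihood of a set of regression errors under a Gaussian model. It is a minimal example assuming NumPy; the data, the parameter values, and the variable names (alpha, beta, mu, sigma) are made up for illustration and are not taken from the text.

```python
# A minimal sketch of Eq. (4.8), assuming NumPy; data and parameter
# values are illustrative, not from the text.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 10.0, size=50)
y = 2.0 + 0.5 * x + rng.normal(0.0, 1.0, size=50)   # synthetic responses

alpha, beta = 2.0, 0.5        # hypothetical regression parameters
mu, sigma = 0.0, 1.0          # hypothetical Gaussian error parameters

y_hat = alpha + beta * x      # regressed means
eps = y_hat - y               # regression errors, eps = (y_hat - y)

# Eq. (4.8) is a product of N Gaussian densities of the errors; it is
# evaluated here in log space to avoid numerical underflow.
log_p = -np.log(sigma * np.sqrt(2.0 * np.pi)) - (eps - mu) ** 2 / (2.0 * sigma ** 2)
log_likelihood = log_p.sum()
print(log_likelihood)         # maximising this quantity is the MLE criterion
```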

Applying the negative logarithm to the likelihood function leads to the error function shown below, where $C$ is a constant,

$$E = -\log \mathcal{L} = \sum_{n=1}^{N} (\hat{y}_n - y_n - \mu)^2 + C = \sum_{n=1}^{N} (\hat{y}_n - \alpha - \beta x_n - \mu)^2 + C \qquad (4.9)$$
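The step between Eq. (4.8) and Eq. (4.9) can be made explicit. Taking the negative logarithm turns the product into a sum; this is a standard manipulation, written here under the assumption that $\sigma$ is treated as a fixed constant:

$$-\log \mathcal{L} = N \log\left(\sigma\sqrt{2\pi}\right) + \frac{1}{2\sigma^2} \sum_{n=1}^{N} (\hat{y}_n - y_n - \mu)^2$$

Since $\sigma$ is fixed, the additive term $N \log(\sigma\sqrt{2\pi})$ is absorbed into the constant $C$, and the positive scale factor $1/(2\sigma^2)$ does not change where the minimum lies, which is how Eq. (4.9) arrives at a plain sum of squared terms.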

Minimising this error function (equivalently, maximising the likelihood function) results in an analytic solution of an OLR model. A vector-matrix notation can be used to show the LSE solution to an OLR model. The model parameters are denoted by a vector shown below,

$$\mathbf{w} = (\alpha, \beta) \qquad (4.10)$$
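Although the remaining vector definitions follow below, a brief sketch of the closed-form solution that this vector-matrix notation leads to may be useful here. The sketch assumes NumPy and the conventional design matrix with a leading column of ones, so that the matrix-vector product reproduces $\alpha + \beta x$; the data are made up for illustration, and np.linalg.lstsq is used only as a numerically stable solver of the normal equations.

```python
# A minimal sketch of the closed-form LSE solution in vector-matrix form,
# assuming the conventional design matrix X = [1, x]; data are illustrative.
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 10.0, size=100)
y = 2.0 + 0.5 * x + rng.normal(0.0, 1.0, size=100)

X = np.column_stack([np.ones_like(x), x])   # row n is (1, x_n), so X @ w = alpha + beta * x

# Normal equations give w = (X^T X)^{-1} X^T y; lstsq solves this stably.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
alpha_hat, beta_hat = w
print(alpha_hat, beta_hat)                  # close to the generating values 2.0 and 0.5
```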

The dependent variable is defined as a vector as well and is shown below,